84 research outputs found

    Value and prediction error in medial frontal cortex: integrating the single-unit and systems levels of analysis

    The role of the anterior cingulate cortex (ACC) in cognition has been extensively investigated with several techniques, including single-unit recordings in rodents and monkeys and EEG and fMRI in humans. This has generated a rich set of data and points of view. Important theoretical functions proposed for the ACC are value estimation, error detection, error-likelihood estimation, conflict monitoring, and estimation of reward volatility. A unified view is lacking at this time, however. Here we propose that online value estimation could be the key function underlying these diverse data. This is instantiated in the reward value and prediction model (RVPM). The model contains units coding for the value of cues (stimuli or actions) and units coding for the differences between such values and the actual reward (prediction errors). We exposed the model to typical experimental paradigms from single-unit, EEG, and fMRI research to compare its overall behavior with the data from these studies. The model reproduced the ACC behavior observed in previous single-unit, EEG, and fMRI studies on reward processing, error processing, conflict monitoring, error-likelihood estimation, and volatility estimation, unifying the interpretations of the role performed by the ACC in some aspects of cognition.
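The RVPM's core computation, value units updated by reward-prediction-error units, can be caricatured with a plain delta rule. This is a minimal hypothetical sketch (the learning rate, reward probability, and trial count are assumed for illustration), not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.1      # learning rate (assumed for illustration)
p_reward = 0.7   # true reward probability of the cue (assumed)
V = 0.0          # value unit for the cue

for trial in range(1000):
    r = float(rng.random() < p_reward)  # actual reward on this trial
    delta = r - V                       # prediction-error unit: reward minus value
    V += alpha * delta                  # value unit drifts toward the reward rate

print(round(V, 2))  # settles near p_reward
```

Over trials the value unit converges to the cue's expected reward while the prediction-error unit signals deviations from it, which is the division of labor the abstract attributes to the two unit types.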

    Maximized likelihood ratio tests for functional localization in fMRI

    fMRI localizer tasks are often used to define subject-specific functional regions of interest (fROIs) that contain the relevant features for subsequent analyses. fROIs are typically small and show large interindividual differences in extent and effect size. As statistical testing procedures focus on controlling false positives, this may lead to ad-hoc adjustment of thresholds in some individuals. The promising likelihood ratio (LR) testing approach for fMRI (Kang et al., 2015) provides simultaneous control of both false positives and false negatives by contrasting evidence in favor of true activation against evidence in favor of the null hypothesis. The authors propose to estimate the expected alternative by a percentile (e.g., the 95th) across the voxels of an effect size map. However, in the context of fROIs, pre-defined observed percentiles may induce inconsistent activation across subjects. In this study we show the potential of a maximized LR approach (Bickel, 2012) for this particular application. The maximum LR is calculated over the same interval of functionally relevant alternatives for all subjects, enabling consistent localization of the fROIs in subjects with both low and high levels of general activity.
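The idea of maximizing the LR over a fixed interval of alternatives can be sketched for a single voxel with a Gaussian likelihood, where the maximizing effect size is simply the estimate clipped to the interval. The interval bounds and standard error below are illustrative assumptions, not values from the study:

```python
import math

def max_lr(beta_hat, se, lo=0.5, hi=2.0):
    """Maximum over theta in [lo, hi] of N(beta_hat; theta, se) / N(beta_hat; 0, se)."""
    theta = min(max(beta_hat, lo), hi)  # Gaussian likelihood peaks at the clipped estimate
    log_lr = (2 * beta_hat * theta - theta**2) / (2 * se**2)
    return math.exp(log_lr)

# The same interval is used for every subject, so a strongly activated
# voxel yields a large LR regardless of the subject's overall activity level.
print(max_lr(1.8, se=0.4) > 8)   # True
print(max_lr(0.0, se=0.4) < 1)   # True
```

Because the interval [lo, hi] is shared across subjects, the evidence scale stays comparable between low- and high-activity individuals, which is the consistency property the abstract emphasizes.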

    Assessing publication bias in coordinate-based meta-analysis techniques?

    Introduction: While publications of fMRI studies have flourished, it is increasingly recognized that progress in understanding human brain function will require the integration of data across studies using meta-analyses. In general, results that do not reach statistical significance are less likely to be published and included in a meta-analysis. Meta-analyses of fMRI studies are prone to this publication bias when studies are excluded because they fail to show activation in specific regions. Further, some studies report only the limited set of peak voxels that survive a statistical threshold, resulting in an enormous loss of data. Coordinate-based toolboxes have been specifically developed to combine the available information from such studies in a meta-analysis. Potential publication bias then stems from two sources: exclusion of studies and missing voxel information within studies. In this study, we focus on the assessment of the first source of bias in coordinate-based meta-analyses. A measure of publication bias indicates the degree to which the analysis might be distorted and helps to interpret results. We propose an adaptation of the Fail-Safe N (FSN; Rosenthal, 1979). The FSN reflects the number of null studies, i.e., studies without activation in a target region, that can be added to an existing meta-analysis without altering the result for the target region. A large FSN indicates robustness of the effect against publication bias. On the other hand, in this context, an FSN that is too large indicates that a small number of studies might drive the entire analysis.

    Method: We simulated 1000 simplistic meta-analyses, each consisting of 3 studies with real activation in a target area (quadrant 1 in Figure 1) and up to 100 null studies with activation in the remaining 3 quadrants. We calculated the FSN as the number of null studies (with a maximum of 100) that can be added to the original meta-analysis of 3 studies without altering the results for the target area. Meta-analyses were conducted with ALE (Eickhoff et al., 2009; 2012; Turkeltaub et al., 2012). We computed the FSN using an uncorrected threshold (α = 0.001) and 2 versions of a False Discovery Rate (FDR) threshold (q = 0.05): FDR pID, which assumes independence or positive dependence between test statistics, and FDR pN, which makes no assumptions and is more conservative. We varied the average sample size n of the individual studies from small (n ≈ 10) to medium (n ≈ 20) and large (n ≈ 30).

    Results: Results are summarised in Figure 2 and visually presented in Figure 3. We find a large difference in average FSN between the thresholding methods. In the case of uncorrected thresholding, the target region remains labeled as active while only 3% of the studies in the meta-analysis report activation at that location. Further, if the sample size of the individual studies in the meta-analysis increases, the FSN decreases.

    Conclusions: The FSN varies largely across thresholding methods and sample sizes. Uncorrected thresholding allows the analysis to be driven by a small number of studies and is therefore contraindicated. While a decreasing FSN with increasing sample size might seem counterintuitive in terms of robustness, it indicates that the analysis is less prone to being driven by a small number of studies. Publication bias assessment methods can be a valuable add-on to existing toolboxes for the interpretation of meta-analytic results. In future work, we will extend our research to other methods for the assessment of publication bias, such as the Egger test (Egger et al., 1997) and the test for excess success (Francis, 2014).

    References:
    Egger, M., Davey Smith, G., Schneider, M., and Minder, C. (1997), ‘Bias in meta-analysis detected by a simple, graphical test’, British Medical Journal, vol. 315, pp. 629-634.
    Eickhoff, S.B., Laird, A.R., Grefkes, C., Wang, L.E., Zilles, K., and Fox, P.T. (2009), ‘Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty’, Human Brain Mapping, vol. 30, pp. 2907-2926.
    Eickhoff, S.B., Bzdok, D., Laird, A.R., Kurth, F., and Fox, P.T. (2012), ‘Activation likelihood estimation revisited’, NeuroImage, vol. 59, pp. 2349-2361.
    Francis, G. (2014), ‘The frequency of excess success for articles in Psychological Science’, Psychonomic Bulletin and Review, vol. 21, no. 5, pp. 1180-1187.
    Rosenthal, R. (1979), ‘The file drawer problem and tolerance for null results’, Psychological Bulletin, vol. 86, no. 3, pp. 638-641.
    Turkeltaub, P.E., Eickhoff, S.B., Laird, A.R., Fox, M., Wiener, M., and Fox, P. (2012), ‘Minimizing within-experiment and within-group effects in activation likelihood estimation meta-analyses’, Human Brain Mapping, vol. 33, pp. 1-13.
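For reference, the classical Fail-Safe N that the abstract adapts can be written down directly from Rosenthal (1979) using Stouffer-combined z-values. This sketch shows only the original formula; the coordinate-based (ALE) adaptation described above instead counts actual null studies added to the meta-analysis:

```python
import math

def fail_safe_n(z_values, z_crit=1.645):
    """Rosenthal's (1979) Fail-Safe N: the number of null (z = 0) studies
    needed to pull the Stouffer-combined z below the one-sided critical value."""
    k = len(z_values)
    n = (sum(z_values) / z_crit) ** 2 - k
    return max(0, math.floor(n))

# Three studies with moderate effects tolerate many unpublished null studies.
print(fail_safe_n([2.5, 3.0, 2.8]))  # 22
```

The same logic carries over to the coordinate-based setting: the FSN is the count of no-activation studies a target region can absorb before it stops being labeled active.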

    The influence of the noradrenergic system on optimal control of neural plasticity

    Decision making under uncertainty is challenging for any autonomous agent. The challenge increases when the environment’s stochastic properties change over time, i.e., when the environment is volatile. In order to adapt efficiently to volatile environments, agents must rely primarily on recent outcomes to quickly change their decision strategies; in other words, they need to increase their knowledge plasticity. On the contrary, in stable environments, knowledge stability must be preferred to preserve useful information against noise. Here we propose that, in the mammalian brain, the locus coeruleus (LC) is one of the nuclei involved in volatility estimation and in the subsequent control of neural plasticity. During a reinforcement learning task, LC activation, measured by means of pupil diameter, coded for both environmental volatility and learning rate. We hypothesize that the LC could be responsible, through noradrenergic modulation, for adaptations that optimize decision making in volatile environments. We also suggest a computational model of the interaction between the anterior cingulate cortex (ACC) and the LC for volatility estimation.
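The proposed link between volatility estimation and plasticity control can be caricatured with a learning rate gated by a running estimate of unsigned prediction error, used here as a crude stand-in for an LC volatility signal. All parameters and the block structure are illustrative assumptions, not the authors' ACC-LC model:

```python
import numpy as np

rng = np.random.default_rng(1)

V, vol = 0.5, 0.0
eta_vol = 0.1              # update rate for the volatility estimate (assumed)
lrs = []

p = 0.8                    # reward probability: stable block, then reversals
for t in range(400):
    if t >= 200 and t % 20 == 0:
        p = 1.0 - p        # volatile block: contingency reverses every 20 trials
    r = float(rng.random() < p)
    delta = r - V
    vol += eta_vol * (abs(delta) - vol)  # track average surprise
    alpha = min(max(vol, 0.05), 0.9)     # plasticity follows estimated volatility
    lrs.append(alpha)
    V += alpha * delta

# Plasticity should be higher, on average, once the environment becomes volatile.
print(np.mean(lrs[250:]) > np.mean(lrs[100:200]))
```

In the stable block the surprise estimate settles low and learning slows, protecting the value estimate from noise; after the reversals begin, surprise and hence plasticity rise, which is the trade-off the abstract describes.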

    Mental rotation meets the motion aftereffect: the role of hV5/MT+ in visual mental imagery

    A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced direction-selective adaptation in hV5/MT+ by means of a motion aftereffect (MAE) while subjects performed a mental rotation task that elicits imagined motion. We concurrently measured behavioral performance and neural activity with fMRI, enabling us to directly assess the effect of a perturbation of hV5/MT+ on other cortical areas involved in the mental rotation task. The activity in hV5/MT+ increased as more mental rotation was required, and the perturbation of hV5/MT+ affected behavioral performance as well as the neural activity in this area. Moreover, several regions in the posterior parietal cortex were also affected by this perturbation. Our results show that hV5/MT+ is required for imagined visual motion and engages in an interaction with the parietal cortex during this cognitive process.

    A note on likelihood ratio testing for average error control

    In an fMRI analysis, testing activation in each of over 100,000 voxels induces a huge multiple testing problem. To guard against an explosion of false positives (FPs), thresholding is made very conservative, but this comes at the price of a problematic increase in false negatives (FNs). In their recent paper, Kang et al. (2015) propose a likelihood ratio (LR) approach that contrasts evidence in favor of true activation against evidence in favor of the null. They show how the likelihood paradigm (LP) controls average FP and FN error rates, decreasing FNs with only a slight increase in FPs. Their work is promising and a welcome contribution to the development of methods that do not solely focus on classical null hypothesis testing but also take into account practical relevance. The authors acknowledge that the approach is specific to the effect size (ES) specified under the alternative and point out that this requires further research. They show that choosing an ES equal to a percentile between the 90th and 99th of the contrast of interest is a possibility. In this note, we study the impact of this ES choice in more detail. First, we want to raise awareness that the value of percentiles of estimated contrast values is highly dependent on the proportion of the brain activated by the task. For example, within the context of localizer tasks, we expect to pinpoint a single brain region and hence a small activated volume. The 95th percentile value would then lead to an underestimation of the true ES, since the ESs of active voxels will be in the right tail of the ES distribution. Secondly, the LP measures evidence between two simple hypotheses (Blume, 2002). This requires valid estimation of the specified ES, as both under- and overestimation of the true ES will result in a reduced LR for active voxels.

    Methods: We simulated single-subject contrast value maps (resolution: 32 × 32; voxel size 1 mm × 1 mm) with a proportion q of active voxels (ES = 2.5% BOLD). Gaussian noise with a standard deviation of 8 was added to the image, resulting in a CNR of 0.32. The LR for each voxel was calculated as the likelihood of the data under the simple alternative with a specified ES divided by the likelihood of the data under the null. First, we let q vary from 0.01 to 0.99 and in each step used the 95th percentile of the estimated contrasts as the specified ES. We demonstrate the effect of a varying q on the LR of one active voxel. Second, we set q = 0.098 and let the specified ES vary from 0.5% to 4.5% BOLD. For the dichotomous LP (dLP), all voxels with an LR larger than or equal to k were retained. For the continuous LP (cLP), inactive voxels had an LR smaller than or equal to 1/k, active voxels had an LR larger than or equal to k, and for weak-evidence voxels the LR was situated between 1/k and k. Using contrast maps, we demonstrate the effect of under- or overestimating the true ES for k = 8.

    Results: We show how the LR varies as a function of the proportion of active voxels through variation in the specified ES. Evidence for activation is only convincing if the specified ES is close to the true ES (2.5% BOLD). Additionally, misspecifying the alternative hypothesis reduces the LR of the active voxels, resulting in more FNs. For the cLP, many null voxels exhibit weak evidence when the true ES is underestimated.

    Conclusions: Kang et al. (2015) present a valuable approach for the simultaneous control of error rates in fMRI data analysis. Our results demonstrate the importance of a correct specification of the alternative hypothesis. Voxels with an ES higher than the specified ES may exhibit a low LR and hence may show inconclusive evidence. Further research is needed to study the possible ES choices and the use of this ES to evaluate evidence for activation.
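The sensitivity to the specified ES can be seen in the voxel-wise LR itself: with a Gaussian likelihood, the LR at a truly active voxel is largest when the specified ES matches the observed contrast, and both under- and overestimation shrink it. The numbers below are illustrative (unit noise standard deviation), not the simulation settings above:

```python
import math

def voxel_lr(y, es, sd=1.0):
    """LR of the simple alternative N(es, sd) versus the null N(0, sd) at contrast y."""
    return math.exp((2 * y * es - es**2) / (2 * sd**2))

y = 2.5                          # observed contrast at a truly active voxel
well = voxel_lr(y, es=2.5)       # correctly specified ES
under = voxel_lr(y, es=1.0)      # underestimated ES
over = voxel_lr(y, es=4.5)       # overestimated ES
print(well > under and well > over)  # True: misspecification reduces the LR
```

This is the note's central point in miniature: evidence for activation is only convincing when the specified ES sits close to the true ES.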

    Fixed versus random effects models for fMRI meta-analysis

    Meta-analyses for brain imaging are gaining attention given the increasing number of published fMRI studies and the need for synthesis and integration of data across studies and labs. Standard meta-analyses are not well adapted to cope with the large amounts of data that are summarized by locating peaks of brain activation. Our goal is to evaluate the basics of existing methods for fMRI meta-analysis, and we thereby focus on the distinction between fixed and random effects models.

    Adaptive smoothing as inference strategy: More specificity for unequally sized or neighboring regions

    Although spatial smoothing of fMRI data can serve multiple purposes, increasing the sensitivity of activation detection is probably its greatest benefit. However, this increased detection power comes with a loss of specificity when non-adaptive smoothing (i.e., the standard in most software packages) is used. Simulation studies and analyses of experimental data were performed using the R packages neuRosim and fmri. In these studies, we systematically investigated the effect of spatial smoothing on the power and the number of false positives in two cases that are often encountered in fMRI research: (1) single-condition activation detection for regions that differ in size, and (2) multiple-condition activation detection for neighbouring regions. Our results demonstrate that adaptive smoothing is superior in both cases because fewer false positives are introduced by the spatial smoothing process compared to standard Gaussian smoothing or FDR inference on unsmoothed data.
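The specificity loss from non-adaptive smoothing can be illustrated in one dimension: a fixed-width kernel bleeds signal from a small active region into its truly inactive neighbour. The kernel and region sizes below are illustrative, not the study's simulation settings:

```python
import numpy as np

img = np.zeros(32)
img[10:13] = 1.0                 # small active region
neighbour = slice(13, 16)        # adjacent, truly inactive region

kernel = np.ones(5) / 5          # non-adaptive (fixed-width) smoothing kernel
smoothed = np.convolve(img, kernel, mode="same")

print(img[neighbour].max())          # 0.0: no signal before smoothing
print(smoothed[neighbour].max() > 0) # True: activation leaked into the neighbour
```

An adaptive scheme would shrink or reshape the kernel near the activation boundary, which is why it introduces fewer such false positives than a fixed Gaussian kernel.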